Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions

Authors: Arvind V. Mahankali, David P. Woodruff, and Ziyu Zhang

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We study low-rank approximation of tensors, focusing on the tensor train and Tucker decompositions, as well as approximations with tree tensor networks and general tensor networks. Hardness results, also shown in this work, suggest that efficiently computing (1+ε)-approximations for rank-k tensor train and Tucker decompositions may be intractable. We therefore propose bicriteria algorithms: algorithms that achieve the desired approximation guarantee while relaxing the rank constraint within a bounded factor.

For rank-k tensor train decomposition of tensors with q modes, we give a (1+ε)-approximation algorithm with a small bicriteria rank (O(qk/ε), up to logarithmic factors) and O(q ⋅ nnz(A)) running time, up to lower order terms, where nnz(A) denotes the number of non-zero entries of the input tensor A. We also show how to convert the algorithm of [Huber et al., 2017] into a relative error approximation algorithm; however, when converted to a (1+ε)-approximation algorithm with bicriteria rank r, their algorithm necessarily has running time O(qr² ⋅ nnz(A)) + n ⋅ poly(qk/ε). The running time of our algorithm is thus better by at least a k² factor. To the best of our knowledge, ours is the first near-input-sparsity time relative error approximation algorithm for tensor train decomposition. Our key technique is a method for efficiently obtaining subspace embeddings for a matrix that is the flattening of a tensor train of q tensors; the number of rows in these subspace embeddings is polynomial in q, avoiding the curse of dimensionality. We extend our algorithm to tree tensor networks and to tensor networks on arbitrary graphs.

Another way of coping with intractability is fixed-parameter tractable (FPT) algorithms. We give FPT algorithms for the tensor train, Tucker, and canonical polyadic (CP) decompositions that are simpler than the FPT algorithms of [Song et al., 2019], since our algorithms do not rely on polynomial system solvers. Our technique of using an exponential number of Gaussian subspace embeddings with exactly k rows (and thus exponentially small success probability) may be of independent interest.
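To make the sketching paradigm concrete, the Python sketch below illustrates the general randomized TT-SVD idea in the spirit of [Huber et al., 2017]: at each mode, a Gaussian subspace embedding compresses the current unfolding before orthogonalization, so the expensive factorization only touches a small sketched matrix. This is a minimal illustration under stated assumptions, not the paper's near-input-sparsity algorithm; the function name randomized_tt and the oversample and seed parameters are hypothetical choices for this example.

import numpy as np

def randomized_tt(T, r, oversample=5, seed=0):
    # Minimal randomized TT decomposition sketch (in the spirit of the
    # randomized TT-SVD of [Huber et al., 2017]). At each mode, a Gaussian
    # sketch compresses the current unfolding, so each QR factorization
    # sees a matrix with only r + oversample columns.
    # Illustrative only -- not the paper's near-input-sparsity algorithm.
    rng = np.random.default_rng(seed)
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(dims[0], -1)                        # first unfolding
    for i in range(len(dims) - 1):
        # Gaussian subspace embedding applied to the column space of M.
        Omega = rng.standard_normal((M.shape[1], r + oversample))
        Q, _ = np.linalg.qr(M @ Omega)                # approximate range of M
        Q = Q[:, :r]                                  # truncate to rank <= r
        cores.append(Q.reshape(r_prev, dims[i], -1))  # TT core i
        # Project M onto the captured range and move to the next mode.
        M = (Q.T @ M).reshape(Q.shape[1] * dims[i + 1], -1)
        r_prev = Q.shape[1]
    cores.append(M.reshape(r_prev, dims[-1], 1))      # last core
    return cores

# Usage: decompose a 3-mode tensor and check the relative error. A random
# dense tensor is not low rank, so the printed error reflects truncation.
T = np.random.randn(15, 20, 25)
cores = randomized_tt(T, r=6)
approx = cores[0]
for G in cores[1:]:
    approx = np.tensordot(approx, G, axes=([-1], [0]))
approx = approx.reshape(T.shape)
print(np.linalg.norm(T - approx) / np.linalg.norm(T))

The contrast drawn in the abstract is visible here: each pass over a mode multiplies the full unfolding by a sketch, which is what leads to the O(qr² ⋅ nnz(A)) term for this style of algorithm, whereas the paper's subspace-embedding construction avoids that overhead.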

Cite as

Arvind V. Mahankali, David P. Woodruff, and Ziyu Zhang. Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 79:1-79:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mahankali_et_al:LIPIcs.ITCS.2024.79,
  author =	{Mahankali, Arvind V. and Woodruff, David P. and Zhang, Ziyu},
  title =	{{Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{79:1--79:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.79},
  URN =		{urn:nbn:de:0030-drops-196078},
  doi =		{10.4230/LIPIcs.ITCS.2024.79},
  annote =	{Keywords: Low rank approximation, Sketching algorithms, Tensor decomposition}
}